Easy2Siksha
GNDU Question Paper-2021
B.A./B.Sc. 5th Semester
COMPUTER SCIENCE
(Database Management System & Oracle)
Time Allowed: 3 Hrs. Maximum Marks: 75
Note: Attempt Five questions in all, selecting at least One question from each section. The
Fifth question may be attempted from any section.
SECTION-A
1.(a) Discuss the components of database management system. 8
(b) Describe the features of normalisation in detail.
2. Explain the following for DBMS:
(a) Relational Algebra
(b) Role of Network Model.
SECTION-B
3. What is meant by concurrency control? Discuss the role of locks in detail. 15
4. Explain the following concepts:
(a) Database Security
(b) Expert Systems.
SECTION-C
5. Discuss object oriented features of Oracle 10g used for the development of DBMS.
6. Discuss the basic structure of DDL AND DML to demonstrate the working of SQL script.
SECTION-D
7.(a) What are types of cursors? Explain the working of implicit cursor by taking some
suitable examples.
(b) Explain the basic structure used for a procedure using an example code snippet to
justify the answer.
8. Explain the following concepts for PL/SQL:
(a) Database triggers
(b) Explicit cursor.
Note: This answer paper was solved entirely by AI (Artificial Intelligence). If you find any error or mistake, please send us feedback about it and we will try to correct it.
GNDU Answer Paper-2021
B.A./B.Sc. 5th Semester
COMPUTER SCIENCE
(Database Management System & Oracle)
Time Allowed: 3 Hrs. Maximum Marks: 75
Note: Attempt Five questions in all, selecting at least One question from each section. The
Fifth question may be attempted from any section.
SECTION-A
1.(a) Discuss the components of database management system.
Ans: Components of a Database Management System (DBMS)
A Database Management System (DBMS) is software that helps you store, manage, and
retrieve data from a database. It acts as a bridge between the database and the users or
applications that need to access that data. In simple terms, it's like a librarian that helps you
organize and find books in a library. The DBMS makes it easier to handle data by providing
tools for creating, managing, and querying databases.
To understand how a DBMS works, let's break down its key components:
1. Database Engine
The database engine is the core of the DBMS. It is responsible for processing requests from
users to retrieve, update, delete, or insert data into the database. Think of it as the "brain"
of the system that performs the actual operations on the data. The engine ensures that data
is stored correctly and efficiently, and it handles tasks like indexing, query optimization, and
transaction management.
Key functions of the database engine include:
Storage Management: It handles how data is physically stored on disk, including
organizing it into tables and managing the file system.
Query Processing: The engine processes user queries written in SQL (Structured
Query Language) and retrieves the requested data.
Transaction Management: It ensures that database transactions (like adding or
deleting records) are processed reliably, following rules like Atomicity, Consistency,
Isolation, and Durability (ACID).
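The ACID guarantees above can be seen in miniature with any transactional engine. The sketch below uses Python's built-in sqlite3 module (the accounts table and its data are invented for illustration): either both halves of a transfer commit together, or neither takes effect.

```python
import sqlite3

# Hypothetical accounts table in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

# Atomicity: both updates commit together, or neither is applied.
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except sqlite3.Error:
    pass  # on failure, neither update would be visible

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 70, 'bob': 80}
```

If either UPDATE raised an error, the `with conn:` block would roll the whole transaction back and both balances would still read 100 and 50.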
2. Database Schema
The schema is like a blueprint or design of the database. It defines the structure of the
database, including the tables, columns, data types, and relationships between tables. A
schema helps you organize data logically and ensures that the data is consistent.
For example, in a library database, you might have a schema that defines a "Books" table
with columns like Title, Author, and ISBN. You could also have a "Members" table with
columns like Name, Membership ID, and Contact Information. The schema ensures that
these tables are properly linked so that you can, for example, track which books are
borrowed by which members.
The schema can be of different types:
Physical Schema: Defines how data is stored physically on the hardware.
Logical Schema: Defines how data is organized logically, such as tables and
relationships.
View Schema: Defines the way data is viewed by users or applications.
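The library schema described above can be written as DDL. A rough sketch in SQLite syntax follows (the Loans link table is an assumed addition, included only to show a relationship between the two tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Logical schema: tables, columns, data types, and relationships via foreign keys.
conn.executescript("""
CREATE TABLE Books (
    ISBN   TEXT PRIMARY KEY,
    Title  TEXT NOT NULL,
    Author TEXT
);
CREATE TABLE Members (
    MembershipID INTEGER PRIMARY KEY,
    Name         TEXT NOT NULL,
    Contact      TEXT
);
-- Assumed link table: records which member borrowed which book.
CREATE TABLE Loans (
    ISBN         TEXT REFERENCES Books(ISBN),
    MembershipID INTEGER REFERENCES Members(MembershipID)
);
""")
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
print(sorted(tables))  # ['Books', 'Loans', 'Members']
```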
3. Database Manager
The database manager is responsible for managing the overall functioning of the database.
It coordinates various components of the DBMS, ensuring that everything runs smoothly.
The manager oversees tasks like:
Access Control: Ensures that only authorized users can access or modify the data.
Concurrency Control: Manages multiple users accessing the database at the same
time, preventing conflicts like two users trying to modify the same data
simultaneously.
Backup and Recovery: Handles backing up the data to prevent data loss and
restoring data in case of system failures.
In simple terms, the database manager is like the supervisor that makes sure everything in
the database is working as it should, and that users can access data without causing
problems.
4. Query Processor
The query processor is like a translator that takes a user's query (typically written in SQL)
and turns it into commands that the database engine can understand and execute. It
performs several tasks:
Query Parsing: It checks the query for syntax errors and ensures that it follows the
correct format.
Query Optimization: It finds the most efficient way to execute the query, making
sure it uses the least amount of resources and takes the least amount of time.
Query Execution: It sends the optimized query to the database engine for execution
and returns the results to the user.
For example, if you ask the DBMS to find all books by a certain author, the query processor
will make sure your query is valid, figure out the fastest way to get the information, and
then retrieve the data from the database.
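Most engines let you inspect the plan the query processor chooses. As a small illustration (SQLite via Python; the table and index names are hypothetical), adding an index on the author column makes the planner report an index search instead of a full table scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Books (Title TEXT, Author TEXT)")
conn.execute("CREATE INDEX idx_author ON Books(Author)")

# Ask the planner how it would execute the query without running it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT Title FROM Books WHERE Author = ?",
    ("Tolkien",),
).fetchall()
for row in plan:
    # The detail string varies by SQLite version, but names the chosen index,
    # e.g. "SEARCH Books USING INDEX idx_author (Author=?)"
    print(row[-1])
```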
5. Database Users
There are different types of users who interact with the DBMS, and they have different roles
and access levels:
Database Administrators (DBAs): These are the people responsible for managing the
entire database system. They set up the database, manage user access, and ensure
that the system is secure and running smoothly.
Application Programmers: These users write programs or applications that interact
with the database. They use the DBMS to develop software that requires data
storage and retrieval.
End Users: These are the people who use the database to perform tasks like entering
data, running queries, or generating reports. End users can be further divided into:
o Casual Users: Users who occasionally interact with the database through
applications.
o Naive Users: Users who perform specific tasks using applications without
needing to know the underlying database structure.
6. Data Dictionary
The data dictionary, also known as the system catalog, is a centralized repository of
information about the database. It contains metadata, which is data about data. The data
dictionary stores details like the structure of the database, constraints, relationships, and
access rights. It's like a reference guide for the DBMS that tells it how the data is organized
and how it should be managed.
For example, the data dictionary might contain information about the columns in the
"Books" table, including their data types (like text, numbers, etc.) and any rules or
constraints (like making sure ISBN numbers are unique).
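In SQLite, for example, this catalog is exposed through `sqlite_master` and the `table_info` pragma. A rough sketch of reading the metadata for the Books table from the running example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Books (ISBN TEXT UNIQUE, Title TEXT, Author TEXT)")

# The engine's own catalog describes the table we just created: each row gives
# column id, name, declared type, NOT NULL flag, default value, primary-key flag.
columns = [(name, col_type) for _, name, col_type, _, _, _
           in conn.execute("PRAGMA table_info(Books)")]
print(columns)  # [('ISBN', 'TEXT'), ('Title', 'TEXT'), ('Author', 'TEXT')]
```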
7. File Manager
The file manager is responsible for handling the physical storage of data on the disk. It
manages the allocation of space on the disk, reads and writes data, and ensures that files
are stored efficiently. The file manager works closely with the storage management part of
the database engine to make sure that data is organized and accessed quickly.
8. Communication Interface
The communication interface allows users or applications to interact with the DBMS. It
provides tools for entering queries, updating data, and retrieving information. The
communication interface can be in the form of a graphical user interface (GUI) that allows
users to perform actions by clicking buttons, or a command-line interface (CLI) where users
type commands.
For example, a banking application might have a GUI that lets users check their account
balance, transfer money, or view transaction history. The communication interface makes it
easy for users to interact with the database without needing to know how it works
internally.
9. Security Management
Security management is a crucial component of a DBMS, ensuring that only authorized users
can access the database. It helps protect sensitive data from unauthorized access, breaches,
or misuse. Security management involves:
User Authentication: Verifying the identity of users before they are allowed to
access the database.
Access Control: Limiting the actions that users can perform based on their roles and
permissions. For example, some users may be able to view data but not modify it,
while others may have full access.
Encryption: Protecting data by converting it into a format that can only be read by
authorized users.
In summary, the components of a Database Management System work together to provide
a powerful, reliable, and secure way to manage data. The database engine handles the core
operations, while the schema defines the structure of the data. The database manager
ensures smooth functioning, the query processor optimizes and executes queries, and the
users interact with the database through various interfaces. The file manager organizes
physical storage, the data dictionary stores metadata, and security management protects
the system from unauthorized access. Together, these components form a complete system
that makes it easier to store, retrieve, and manage data in a structured and efficient
manner.
(b) Describe the features of normalisation in detail.
Ans: Normalization in a Database Management System (DBMS) is a process used to organize
data in a database. The primary goal of normalization is to reduce redundancy and ensure
data integrity by structuring the database into well-defined tables.
1. What is Normalization?
Normalization is the process of organizing data in a database into smaller, related tables to
avoid duplication, minimize redundancy, and ensure data consistency. By splitting data into
different tables, normalization helps prevent issues like data anomalies, which can occur
when the same data is repeated across multiple locations.
2. Why is Normalization Important?
When a database isn't normalized, it may contain redundant data, meaning the same
information is stored in multiple places. This leads to several problems, including:
Update Anomalies: If you update data in one place but forget to update it
elsewhere, the database can become inconsistent.
Insert Anomalies: Adding new data can be difficult or require unnecessary
repetition.
Delete Anomalies: Deleting data might remove necessary information that is
repeated elsewhere.
Normalization helps in avoiding these problems by organizing the data properly.
3. The Different Levels (Forms) of Normalization
Normalization is carried out in stages known as "normal forms." There are several normal
forms, and each addresses specific issues. The most common normal forms are:
1st Normal Form (1NF)
Definition: A table is in 1NF when it contains only atomic (indivisible) values,
meaning each cell in a table holds a single value.
Example: Imagine a table that lists customer names and their phone numbers. If one
cell contains multiple phone numbers, the table is not in 1NF. To bring it to 1NF, you
would need to create a separate row for each phone number.
Before 1NF:

    Customer Name | Phone Number
    --------------|--------------
    Alice         | 12345, 67890

After 1NF:

    Customer Name | Phone Number
    --------------|--------------
    Alice         | 12345
    Alice         | 67890
Purpose: Ensures that each cell holds a single, indivisible value, which makes
the data more organized and easier to manage.
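The 1NF split above can be sketched in a few lines of Python (the Alice row comes from the table; the Bob row is an added example):

```python
# A non-1NF row: one cell holds several phone numbers at once.
raw = [("Alice", "12345, 67890"), ("Bob", "55555")]

# Bringing it to 1NF: one atomic value per cell, one row per phone number.
normalized = [
    (name, phone.strip())
    for name, phones in raw
    for phone in phones.split(",")
]
print(normalized)  # [('Alice', '12345'), ('Alice', '67890'), ('Bob', '55555')]
```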
2nd Normal Form (2NF)
Definition: A table is in 2NF if it is already in 1NF and all non-key attributes are fully
dependent on the primary key. In other words, no column should depend on only a
part of the primary key.
Example: Suppose you have a table with a composite key (a key made of multiple
columns), and some columns depend on only one part of the key. In 2NF, we
separate those columns into a new table.
Before 2NF:

    Order ID | Product ID | Product Name
    ---------|------------|--------------
    1        | 101        | Laptop

After 2NF:

o Order Table:

    Order ID | Product ID
    ---------|-----------
    1        | 101

o Product Table:

    Product ID | Product Name
    -----------|--------------
    101        | Laptop
Purpose: 2NF ensures that the data is divided into tables where all columns are fully
dependent on the primary key, eliminating partial dependencies.
3rd Normal Form (3NF)
Definition: A table is in 3NF if it is in 2NF and there are no transitive dependencies.
This means that no column should depend on another non-key column.
Example: If one column depends on another column, instead of the primary key, we
need to separate that into a new table.
Before 3NF:

    Product ID | Category ID | Category Name
    -----------|-------------|---------------
    101        | 1           | Electronics

Here Category Name depends on Category ID (a non-key column) rather than on the
primary key Product ID, which is a transitive dependency.

After 3NF:

o Product Table:

    Product ID | Category ID
    -----------|------------
    101        | 1

o Category Table:

    Category ID | Category Name
    ------------|---------------
    1           | Electronics
Purpose: 3NF further reduces redundancy by ensuring that non-key columns only
depend on the primary key and not on other non-key columns.
Boyce-Codd Normal Form (BCNF)
Definition: BCNF is a stricter version of 3NF. A table is in BCNF if it is in 3NF, and for
every functional dependency (where one column determines another), the left side
of the dependency must be a superkey (a key that uniquely identifies a row).
Example: BCNF handles cases where 3NF might not fully eliminate redundancy,
especially in tables with complex keys.
4th Normal Form (4NF)
Definition: A table is in 4NF if it is in BCNF and has no multi-valued dependencies. A
multi-valued dependency occurs when one column determines multiple values
independently.
Example: If a student can take multiple courses, and each course can have multiple
instructors, a multi-valued dependency exists. 4NF requires splitting these into
separate tables.
Before 4NF:

    Student ID | Course  | Instructor
    -----------|---------|------------
    1          | Math    | Dr. Smith
    1          | Science | Dr. Johnson

After 4NF:

o Student-Course Table:

    Student ID | Course
    -----------|--------
    1          | Math
    1          | Science

o Course-Instructor Table:

    Course  | Instructor
    --------|------------
    Math    | Dr. Smith
    Science | Dr. Johnson
Purpose: 4NF eliminates multi-valued dependencies, ensuring that each piece of
data is stored in its own table.
5th Normal Form (5NF)
Definition: A table is in 5NF if it is in 4NF and cannot be further divided without
losing data. 5NF handles cases where complex relationships exist between columns,
and the table can be decomposed into smaller tables without losing information.
Example: If a table involves multiple relationships between several columns, 5NF
ensures that the table is broken down in a way that preserves all relationships.
4. Advantages of Normalization
Normalization has several benefits, including:
Reduced Data Redundancy: By splitting data into smaller tables, normalization
eliminates the need for repeating the same data in multiple places, reducing
redundancy.
Improved Data Integrity: Since data is stored in a structured way, normalization
ensures that changes are consistent across the database, improving data integrity.
Efficient Data Management: Normalized databases are easier to manage and
update, as data is organized into clear, logical groups.
Prevention of Anomalies: Normalization prevents update, insert, and delete
anomalies, ensuring that the database remains accurate and consistent.
5. Disadvantages of Normalization
Although normalization has many advantages, it also comes with some drawbacks:
Increased Complexity: Normalization can make database design more complex, as
data is spread across multiple tables.
Performance Impact: In highly normalized databases, retrieving data might require
joining many tables, which can impact performance, especially in large databases.
Difficult for Beginners: Understanding and implementing normalization can be
challenging for those new to database design.
6. Denormalization
Sometimes, databases are denormalized to improve performance. Denormalization involves
combining tables to reduce the number of joins needed to retrieve data. While this can
improve performance, it also reintroduces redundancy and the risk of data anomalies.
Therefore, denormalization is typically used in cases where performance is a higher priority
than reducing redundancy.
7. Normalization in Real-World Applications
E-Commerce Websites: In an e-commerce website, normalization is used to separate
data about customers, products, orders, and payments. This ensures that the same
product information isn’t repeated across different orders, and any updates to
product details are reflected consistently.
Hospital Management Systems: In hospital management systems, normalization is
used to organize patient information, doctor schedules, and treatment records into
separate tables. This prevents duplication of data and ensures that any updates are
accurate across the system.
Banking Systems: In banking, normalization ensures that customer information,
account details, and transaction records are stored in separate tables. This helps
maintain data consistency and reduces redundancy.
Conclusion
Normalization is a crucial concept in database design that ensures data is organized
efficiently, reducing redundancy and ensuring data integrity. By breaking down data into
related tables and applying the rules of normalization, databases become easier to manage,
more consistent, and less prone to errors. While normalization can introduce some
complexity and performance considerations, its benefits make it a fundamental practice in
creating reliable and effective databases.
2. Explain the following for DBMS:
(a) Relational Algebra
Ans: Relational Algebra in DBMS
Introduction to Relational Algebra
Relational algebra is a fundamental concept in database management systems (DBMS) that
forms the theoretical basis for manipulating and querying relational databases. Think of
relational algebra as the language of operations that allows you to retrieve and manage
data in databases efficiently. It’s like a set of instructions that you can give to a database to
get the information you need.
Relational algebra operates on relations, which are basically tables in a database. Each table
consists of rows (called tuples) and columns (called attributes). The operations in relational
algebra take one or more tables as input and produce a new table as output.
In simpler terms, relational algebra provides a way to ask the database questions (queries)
using a set of predefined operations, and the answers are provided in the form of new
tables.
Let’s break down the key operations in relational algebra in a simplified manner.
1. Basic Operations of Relational Algebra
Relational algebra is composed of several basic operations that can be combined to form
more complex queries. These operations can be broadly categorized as follows:
1. Selection (σ)
2. Projection (π)
3. Union (∪)
4. Set Difference (−)
5. Cartesian Product (×)
6. Rename (ρ)
1.1. Selection (σ)
Purpose: The selection operation is used to filter rows from a table based on a
specific condition.
Example: Suppose you have a table named Students with columns like StudentID,
Name, and Age. If you want to select all students who are older than 18, you
would use the selection operation to filter the rows based on the condition Age > 18.
In relational algebra notation, it looks like this:

    σ_(Age > 18)(Students)
Explanation: The σ symbol stands for selection, and the condition Age > 18 is applied
to the Students table. The result will be a new table containing only the rows where
students are older than 18.
1.2. Projection (π)
Purpose: The projection operation is used to select specific columns (attributes)
from a table.
Example: If you only need the Name and Age columns from the Students table, you
would use the projection operation to project only those columns.
In relational algebra notation, it looks like this:

    π_(Name, Age)(Students)
Explanation: The π symbol stands for projection, and it extracts only the Name and
Age columns from the Students table. The result will be a new table containing only
those two columns.
1.3. Union (∪)
Purpose: The union operation combines two tables with the same structure into
one, removing any duplicate rows.
Example: Suppose you have two tables, StudentsInCourseA and StudentsInCourseB,
each containing the same columns StudentID and Name. If you want a list of all
students in either course, you would use the union operation to combine the tables.
In relational algebra notation, it looks like this:

    StudentsInCourseA ∪ StudentsInCourseB

Explanation: The ∪ symbol represents union, and it combines the two tables,
removing duplicates, to create a new table with all students from both courses.
1.4. Set Difference (−)
Purpose: The set difference operation finds rows that are in one table but not in
another.
Example: If you want to find students who are in StudentsInCourseA but not in
StudentsInCourseB, you would use the set difference operation.
In relational algebra notation, it looks like this:

    StudentsInCourseA − StudentsInCourseB
Explanation: The − symbol represents set difference, and it returns a new table
containing students who are in StudentsInCourseA but not in StudentsInCourseB.
1.5. Cartesian Product (×)
Purpose: The Cartesian product operation combines two tables by pairing every row
of the first table with every row of the second table.
Example: Suppose you have a Students table and a Courses table. If you want to
create all possible combinations of students and courses, you would use the
Cartesian product operation.
In relational algebra notation, it looks like this:

    Students × Courses
Explanation: The × symbol represents Cartesian product, and it pairs every row from
the Students table with every row from the Courses table to create a new table with
all possible combinations of students and courses.
1.6. Rename (ρ)
Purpose: The rename operation is used to give a new name to a table or its columns.
Example: If you want to rename the Students table to EnrolledStudents, you would
use the rename operation.
In relational algebra notation, it looks like this:

    ρ_(EnrolledStudents)(Students)
Explanation: The ρ symbol represents rename, and it assigns the new name
EnrolledStudents to the Students table. This is useful when you need to refer to the
table under a different name in a complex query.
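To make the basic operations concrete, here is a rough sketch that models tables as Python lists of dictionaries and implements selection and projection directly (the student data is invented):

```python
# Tables as lists of dicts; every operation returns a new "table".
students = [
    {"StudentID": 1, "Name": "Asha", "Age": 20},
    {"StudentID": 2, "Name": "Ravi", "Age": 17},
]

def select(rows, predicate):
    """Selection (sigma): keep only the rows satisfying the condition."""
    return [r for r in rows if predicate(r)]

def project(rows, *attrs):
    """Projection (pi): keep only the named columns."""
    return [{a: r[a] for a in attrs} for r in rows]

adults = select(students, lambda r: r["Age"] > 18)
result = project(adults, "Name", "Age")
print(result)  # [{'Name': 'Asha', 'Age': 20}]
```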
2. Advanced Operations of Relational Algebra
In addition to the basic operations, relational algebra also includes some advanced
operations that are built on top of the basic ones. These advanced operations allow for
more complex queries and data manipulation.
1. Join
2. Intersection
3. Division
2.1. Join
Purpose: The join operation combines rows from two tables based on a related
column.
Types of Join:
o Theta Join: Combines rows based on a specific condition.
o Equi Join: Combines rows where columns from both tables are equal.
o Natural Join: Combines rows with equal values in common columns and
removes duplicates.
Example: Suppose you have a Students table and a Courses table, both containing a
common column StudentID. If you want to combine information from both tables
based on the StudentID column, you would use the join operation.
In relational algebra notation, it might look like this:

    Students ⋈_(Students.StudentID = Courses.StudentID) Courses

Explanation: The ⋈ symbol represents join, and it combines the rows from the
Students and Courses tables where the StudentID columns match.
2.2. Intersection (∩)
Purpose: The intersection operation finds rows that are common to both tables.
Example: If you have two tables, StudentsInCourseA and StudentsInCourseB, and
you want to find students who are enrolled in both courses, you would use the
intersection operation.
In relational algebra notation, it looks like this:

    StudentsInCourseA ∩ StudentsInCourseB
Explanation: The ∩ symbol represents intersection, and it returns a new table
containing only the rows that are present in both StudentsInCourseA and
StudentsInCourseB.
2.3. Division
Purpose: The division operation is used when you want to find rows in one table that
match all rows in another table.
Example: Suppose you have a Students table and a CoursesCompleted table. If you
want to find students who have completed all courses, you would use the division
operation.
In relational algebra notation, it might look like this:

    Students ÷ CoursesCompleted
Explanation: The ÷ symbol represents division, and it returns a new table containing
students who have completed all the courses listed in the CoursesCompleted table.
3. Why Relational Algebra is Important
Relational algebra is crucial for understanding how queries are processed in a DBMS. When
you write a query using SQL (Structured Query Language), the DBMS internally converts that
query into relational algebra operations to retrieve the data.
Understanding relational algebra helps you write more efficient queries because you know
how the DBMS processes them. It also forms the foundation for query optimization, where
the DBMS tries to find the best way to execute your query.
4. Combining Operations
Relational algebra operations can be combined to form more complex queries. For example,
you can first select specific rows using the selection operation, then project certain columns,
and finally join the result with another table.
Here’s an example of combining operations:
Scenario: Find the names of all students who are older than 18 and are enrolled in a
specific course.
Steps:
1. Use the selection operation to filter students older than 18.
2. Use the join operation to combine the Students and Enrollments tables based
on StudentID.
3. Use the projection operation to select the Name column from the result.
In relational algebra notation, it might look like this:

    π_(Name)(σ_(Age > 18)(Students) ⋈ Enrollments)
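In SQL, the same three steps collapse into one statement, since the DBMS translates it into these algebra operations internally. A small runnable sketch (SQLite via Python, with invented sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Students (StudentID INTEGER PRIMARY KEY, Name TEXT, Age INTEGER);
CREATE TABLE Enrollments (StudentID INTEGER, Course TEXT);
INSERT INTO Students VALUES (1, 'Asha', 20), (2, 'Ravi', 17);
INSERT INTO Enrollments VALUES (1, 'DBMS'), (2, 'DBMS');
""")

# selection -> WHERE, join -> JOIN ... ON, projection -> the SELECT list
names = [row[0] for row in conn.execute(
    "SELECT s.Name "
    "FROM Students AS s JOIN Enrollments AS e ON s.StudentID = e.StudentID "
    "WHERE s.Age > 18 AND e.Course = 'DBMS'"
)]
print(names)  # ['Asha']
```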
5. Summary
Relational Algebra is a fundamental concept in database theory that provides a set of
operations to manipulate and query data stored in relational databases. The key operations
of relational algebra include selection, projection, union, set difference, Cartesian product,
and join. These operations form the backbone of many query languages, including SQL, and
understanding them helps in optimizing queries and improving database performance.
Though relational algebra can be complex and mathematical, it plays a crucial role in the
world of databases, enabling us to retrieve and manipulate data in powerful and efficient
ways. As you continue studying DBMS, grasping the concepts of relational algebra will give
you a deeper understanding of how databases work behind the scenes, ultimately helping
you become more proficient in managing and querying databases.
This explanation should give you a clear and simple understanding of relational algebra and
its importance in DBMS.
(b) Role of Network Model.
Ans: Role of the Network Model in DBMS
The Network Model is one of the key data models in the field of Database Management
Systems (DBMS). To understand its role, let’s break down the concepts in a simplified way.
1. What is a Data Model?
A data model defines how data is stored, organized, and manipulated within a database.
Think of it as a blueprint that helps in designing the structure of a database. There are
several types of data models, and each one has its own way of organizing data. The Network
Model is one of these, alongside others like the Hierarchical Model, Relational Model, and
Object-Oriented Model.
2. Introduction to the Network Model
The Network Model was developed in the 1960s by Charles Bachman and was widely used
before the Relational Model became dominant. It is an extension of the earlier Hierarchical
Model, offering more flexibility in terms of how data can be related to one another.
In the Network Model, data is organized using a graph structure, where entities (or records)
are nodes, and the relationships between them are represented by edges (or links). This
model allows more complex relationships than the Hierarchical Model, which only supports
a tree-like structure.
For example, imagine a scenario where you are managing data about students and courses.
In a Hierarchical Model, you would only be able to define one-to-many relationships (like
one student enrolled in many courses), but the Network Model allows many-to-many
relationships (like many students enrolled in many courses).
3. Role and Importance of the Network Model
The role of the Network Model in DBMS is to facilitate complex relationships among data.
Here’s how it plays a crucial role:
1. Handling Complex Relationships: The Network Model excels in handling complex
relationships between different data entities. It allows multiple parent-child
relationships, which means that one record can be linked to multiple records,
creating a web of connections. This is particularly useful when dealing with real-
world scenarios where data isn’t always organized in a strict hierarchy. For example,
in a company database, an employee might work on multiple projects, and each
project could have multiple employees. The Network Model efficiently handles these
many-to-many relationships.
2. Flexibility in Data Structure: Unlike the Hierarchical Model, which restricts data to a
tree structure, the Network Model offers greater flexibility. It allows records to have
multiple relationships and links, which means that the same piece of data can be
related to multiple other pieces of data. This flexibility makes it easier to represent
complex data structures without duplication.
3. Efficiency in Data Retrieval: The Network Model is efficient in terms of data
retrieval, especially when the relationships between data are complex. Since the
model is based on a graph structure, navigating between related records is quick and
straightforward. In cases where you need to retrieve data by following links between
records, the Network Model can be faster than other models, like the Relational
Model.
4. Supports Many-to-Many Relationships: One of the key features of the Network
Model is its ability to handle many-to-many relationships directly. For instance, in a
university database, students can enroll in multiple courses, and each course can
have many students. The Network Model can easily represent this kind of
relationship without needing complex join operations, which would be required in
the Relational Model.
5. Data Integrity: The Network Model helps maintain data integrity by ensuring that
relationships between data are explicitly defined. Since the model requires that each
record’s relationships are predefined and stored in the database, there’s less risk of
losing important connections between data. This is particularly important in
scenarios where the integrity of relationships is critical, such as in financial databases
or supply chain management systems.
6. Supports Multiple Paths to Data: In the Network Model, data can be accessed
through multiple paths, thanks to its graph structure. This means that there are
often several ways to retrieve the same piece of data, depending on how the
relationships are defined. For example, in a social networking site database, you
might want to access a user’s data either through their friend connections or
through their posts. The Network Model’s flexibility allows for such multiple access
paths, making data retrieval more versatile.
7. Efficient for Certain Types of Applications: The Network Model is particularly well-
suited for applications where data relationships are complex and need to be
frequently accessed, such as telecommunications networks, transportation systems,
and inventory management systems. In these types of applications, the efficiency of
the Network Model in handling multiple relationships and connections makes it a
preferred choice.
4. Real-World Example of the Network Model
Let’s consider an example to illustrate the role of the Network Model in a real-world
scenario:
Imagine a banking system where customers have multiple accounts, and each account can
be associated with multiple customers (e.g., joint accounts). Additionally, customers can
have relationships with multiple banks, and banks can have relationships with multiple
customers. This is a complex scenario with many-to-many relationships.
In a Hierarchical Model, representing this would be difficult, as it only supports one-to-many
relationships (e.g., one customer can have multiple accounts). However, in the Network
Model, you can represent the relationships between customers, accounts, and banks using a
graph structure. This allows the system to handle the complex relationships efficiently and
ensures that data retrieval (e.g., finding all accounts associated with a customer) is quick
and straightforward.
5. How the Network Model Works
The Network Model works by organizing data into records, which are then connected
through relationships called sets. A set consists of one owner (parent) and multiple
members (children), similar to the parent-child relationships in the Hierarchical Model.
However, unlike the Hierarchical Model, a child in the Network Model can have more than
one parent, allowing for more complex data relationships.
Record: A record represents an entity, such as a customer, account, or product. Each
record contains data fields that store information about that entity.
Set: A set defines a relationship between two records. It consists of one owner
record and multiple member records. For example, in a banking system, a set might
represent the relationship between a customer and their accounts, with the
customer being the owner and the accounts being the members.
Owner and Member: In each set, the owner is the parent record, and the members
are the child records. However, in the Network Model, a child can be part of multiple
sets, meaning it can have multiple parents. This flexibility allows for complex
relationships to be represented without duplicating data.
6. Advantages of the Network Model
The Network Model offers several advantages, particularly in situations where data
relationships are complex and need to be accessed efficiently:
1. Flexibility: The Network Model allows for flexible data organization by supporting
many-to-many relationships and multiple access paths to data.
2. Efficiency: The graph structure of the Network Model makes data retrieval efficient,
especially when navigating between related records.
3. Data Integrity: By explicitly defining relationships between records, the Network
Model helps maintain data integrity and reduces the risk of losing important
connections.
4. Reduced Data Redundancy: Since the Network Model allows for complex
relationships without duplicating data, it helps reduce redundancy and ensures that
data is stored more efficiently.
7. Challenges of the Network Model
Despite its advantages, the Network Model also has some challenges that limit its use in
modern DBMS:
1. Complexity: The Network Model’s flexibility can also make it complex to design and
manage. Defining and maintaining the relationships between records requires
careful planning and management, which can be challenging in large systems.
2. Lack of Standardization: Unlike the Relational Model, which has a well-defined
standard (SQL), the Network Model lacks a standardized query language. This can
make it more difficult to work with, especially for developers who are used to
working with SQL-based relational databases.
3. Limited Adoption: With the rise of the Relational Model and its widespread
adoption, the Network Model has become less popular over time. Most modern
DBMS systems are based on the Relational Model, and the Network Model is
primarily used in specialized applications.
8. Conclusion
The Network Model plays a significant role in DBMS, particularly in scenarios where data
relationships are complex and need to be efficiently managed. Its flexibility in handling
many-to-many relationships, efficient data retrieval, and support for multiple access paths
make it a powerful tool for certain types of applications. However, its complexity and lack of
standardization have led to its decline in popularity, with the Relational Model becoming
the dominant approach in modern databases.
Despite this, the Network Model remains an important part of the history of database
management and continues to be used in specific industries where its strengths can be fully
utilized. Understanding the Network Model provides valuable insights into how data can be
organized and managed in complex systems, offering an alternative to more rigid data
models.
SECTION-B
3. What is meant by concurrency control? Discuss the role of locks in detail.
Ans: Concurrency Control in Database Management Systems
When we talk about concurrency control in a Database Management System (DBMS), we’re
dealing with situations where multiple users or processes are accessing and modifying data
at the same time. A DBMS needs to make sure that these concurrent actions do not lead to
problems, like inconsistent data or errors. Let’s break this down step by step.
1. What is Concurrency?
Concurrency refers to the situation where two or more database transactions are happening
at the same time. Think of a database as a library where people can check out or return
books. If two people try to check out the same book at the exact same time, we need a
system to manage this situation. Similarly, if two people are trying to make changes to the
same data in a database simultaneously, the DBMS needs to handle this to avoid conflicts.
2. Why Do We Need Concurrency Control?
Concurrency control ensures that the database remains consistent, even when multiple
transactions are happening at the same time. Imagine you’re working on a group project
where everyone is editing a shared document. If one person deletes a section while another
is making changes to that same section, the document might get messed up. Concurrency
control is like a set of rules that prevent such problems in databases.
Without proper concurrency control, the following issues might arise:
1. Lost Updates: When two transactions simultaneously update the same data, one of
the updates might get lost.
2. Dirty Read: One transaction reads data that another transaction has written but not
yet committed (finalized). If the second transaction is rolled back, the first
transaction ends up with incorrect data.
3. Uncommitted Dependency: A transaction bases its work on data that another
transaction has written but not yet committed; if that transaction rolls back, the
dependent transaction is left working with invalid data.
4. Inconsistent Retrievals: If one transaction retrieves data while another is updating it,
the retrieved data might be inconsistent or incorrect.
Concurrency control helps to prevent these problems by managing how and when
transactions are allowed to interact with the data.
3. How Does Concurrency Control Work?
There are different techniques used in DBMS to control concurrency:
1. Locks
2. Timestamp Ordering
3. Optimistic Concurrency Control
In this explanation, we'll focus on Locks, as it is one of the most common methods used in
concurrency control.
The Role of Locks in Concurrency Control
Locks play a critical role in ensuring that transactions don’t interfere with each other. A lock
is like a "Do Not Disturb" sign. When one transaction locks a piece of data, it’s telling other
transactions, “Hey, I’m working on this. Please wait until I’m done.”
Locks can be applied at various levels within the database, such as on a specific row of a
table, an entire table, or even the whole database, depending on the DBMS’s capabilities
and the needs of the system.
1. Types of Locks
There are different types of locks, each designed for specific scenarios:
1. Shared Lock (S Lock): This type of lock allows multiple transactions to read the same
data simultaneously. Think of it as letting multiple people read a book at the same
time. However, no one is allowed to make any changes to the data while the shared
lock is in place.
2. Exclusive Lock (X Lock): This lock allows only one transaction to read or write the
data. It’s like having a book checked out to only one person; no one else can read or
modify the book while it’s locked.
3. Intent Lock: These locks signal the intention of acquiring a more specific lock on a
lower level of the data hierarchy (like locking a row after locking the table). It helps
in avoiding conflicts when multiple transactions are working on different parts of the
data.
4. Update Lock: An update lock is a hybrid lock that allows a transaction to read a piece
of data with the intention of updating it later. It prevents other transactions from
acquiring an exclusive lock on that data but allows shared locks. Once the
transaction is ready to update, the update lock can be escalated to an exclusive lock.
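In Oracle SQL these lock modes can be requested explicitly. A minimal sketch, assuming a
hypothetical accounts table:

```sql
-- Table-level shared lock: other sessions may read, but not write
LOCK TABLE accounts IN SHARE MODE;

-- Table-level exclusive lock: only this session may modify the table
LOCK TABLE accounts IN EXCLUSIVE MODE;

-- Row-level locking: lock only the rows you intend to change
SELECT balance
FROM accounts
WHERE acct_id = 101
FOR UPDATE;
```

The row-level SELECT ... FOR UPDATE form is usually preferred, since it blocks far fewer
concurrent transactions than a whole-table lock.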
2. How Locks Work
Here’s how locks manage concurrent transactions in a DBMS:
Acquiring a Lock: Before a transaction can access a piece of data, it must request
and acquire the appropriate lock (either shared or exclusive). The DBMS checks if the
lock is available and grants it if no other transaction is holding a conflicting lock.
Holding a Lock: Once a transaction holds a lock, it can safely perform its operations
(read or write) without interference from other transactions.
Releasing a Lock: After the transaction completes its operations, it releases the lock,
allowing other transactions to access the data. The release typically happens when
the transaction is either committed (finalized) or rolled back (undone).
Locks are essential in preventing the problems we discussed earlier, like lost updates or dirty
reads. By controlling access to the data, locks ensure that transactions are isolated from one
another, maintaining the database’s consistency.
3. Locking Protocols
Locks need to be managed systematically to ensure that they prevent conflicts without
causing unnecessary delays. Different protocols or rules govern how locks are used:
1. Two-Phase Locking (2PL): This is one of the most commonly used locking protocols.
It divides the transaction into two phases:
o Growing Phase: The transaction acquires all the locks it needs, but it can’t
release any lock.
o Shrinking Phase: Once the transaction starts releasing locks, it cannot acquire
any new ones.
The Two-Phase Locking protocol guarantees serializability: because no transaction acquires
a new lock after it has released one, the interleaved execution is equivalent to some serial
ordering of the transactions. This avoids inconsistencies by preventing other transactions
from accessing partially updated data.
2. Strict Two-Phase Locking: This is a stricter version of the Two-Phase Locking
protocol. In this protocol, the transaction holds all its exclusive locks until it commits
or rolls back. This further reduces the chances of issues like dirty reads but can
increase waiting times for other transactions.
3. Deadlock Handling in Locking: A deadlock occurs when two or more transactions are
waiting for each other to release locks. This situation can lead to a standstill, where
none of the transactions can proceed. To handle deadlocks, the DBMS uses different
strategies:
o Deadlock Prevention: Ensuring that transactions follow a predefined order to
acquire locks so that circular waiting is avoided.
o Deadlock Detection and Resolution: The DBMS regularly checks for
deadlocks and, if found, aborts one of the transactions to break the cycle.
Advantages and Disadvantages of Locking
Locks are powerful tools for managing concurrency, but they come with their own set of
advantages and disadvantages:
Advantages:
Consistency: Locks ensure that the database remains consistent even when multiple
transactions are happening at the same time.
Isolation: Locks provide transaction isolation, meaning that the actions of one
transaction do not affect another until it’s complete.
Disadvantages:
Performance Overhead: Managing locks requires additional resources, and locking
large amounts of data can slow down the system.
Deadlocks: Improper use of locks can lead to deadlocks, where transactions are
stuck waiting for each other, halting progress.
Reduced Parallelism: Locks can reduce the ability to process transactions in parallel,
which can lead to increased waiting times for users.
Conclusion
Concurrency control is a crucial aspect of database management systems, ensuring that
multiple transactions can occur simultaneously without causing data inconsistency or errors.
Locks are one of the most effective methods for achieving concurrency control, preventing
conflicts between transactions. By carefully managing access to the data, locks ensure that
the database remains reliable and consistent, even in environments where many users are
interacting with it at the same time.
Understanding how locks work and the different types of locking protocols can help
database administrators and developers optimize their systems for both performance and
reliability. Balancing the need for concurrency with the need for data integrity is key to
successful database management.
4. Explain the following concepts:
(a) Database Security
(b) Expert Systems.
Ans: (a) Database Security
Database security is all about protecting the data stored in databases from unauthorized
access, misuse, or damage. In simple terms, it’s like locking your data in a safe to ensure that
only the right people can see or use it. Let's break it down:
1. Why is Database Security Important?
Protecting Sensitive Data: Databases often store critical information like financial
records, personal data, and confidential business details. If this information falls into
the wrong hands, it could lead to identity theft, financial losses, or other serious
problems.
Compliance with Laws and Regulations: Governments around the world have laws
that require companies to protect certain types of data, like personal information. If
a company fails to protect its data, it can face legal penalties.
Maintaining Trust: If a company’s database is breached, it could lose the trust of its
customers or clients, which could damage its reputation and business.
2. Types of Threats to Database Security
There are various ways databases can be attacked or compromised:
Unauthorized Access: When someone without permission gains access to the
database. This could happen due to weak passwords, shared login credentials, or
vulnerabilities in the system.
SQL Injection: This is a type of attack where an attacker inputs malicious SQL code
into a form or URL, tricking the database into executing commands it shouldn’t. This
can allow attackers to view, delete, or alter the data in the database.
Insider Threats: Sometimes, the danger comes from within the organization.
Employees with access to the database might misuse their privileges to steal or
manipulate data.
Malware and Viruses: Just like regular computer systems, databases can be infected
with malware that can steal or corrupt data.
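The SQL injection threat above is usually countered with bind variables, which keep user
input as pure data rather than executable SQL. A hedged PL/SQL sketch (the users table
and the input value are hypothetical):

```sql
DECLARE
  p_name  VARCHAR2(50) := 'alice';  -- imagine this value came from a web form
  v_count NUMBER;
BEGIN
  -- Unsafe alternative (do NOT do this): concatenating p_name into the
  -- statement text would let crafted input alter the query itself.
  -- Safe: the bind variable :n is never parsed as SQL.
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM users WHERE username = :n'
    INTO v_count
    USING p_name;
END;
/
```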
3. Database Security Measures
There are several strategies and technologies used to protect databases:
Access Control: This means ensuring that only authorized users can access the
database. It’s like having a lock on your door that only certain people have the key
to. Access control often involves:
o User Authentication: Verifying that users are who they claim to be, usually
through usernames, passwords, or biometric data like fingerprints.
o Role-Based Access Control (RBAC): Assigning users different levels of access
based on their roles within the organization. For example, a regular employee
might only be able to view data, while an administrator can add, delete, or
change data.
Encryption: This involves encoding data so that it can only be read by someone with
the correct decryption key. Even if an attacker manages to steal the data, they won’t
be able to understand it without the key.
Firewalls and Intrusion Detection Systems (IDS): Firewalls act like barriers that
protect the database from unauthorized traffic, while IDS systems monitor for
suspicious activity that could indicate an attack.
Regular Audits and Monitoring: Regularly reviewing logs and monitoring database
activity can help detect any unusual behavior or potential breaches early on.
Backup and Recovery: In case of an attack or system failure, having regular backups
ensures that the data can be restored without significant loss.
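Role-based access control, described above, maps directly onto SQL roles and grants. A
minimal sketch with hypothetical role and user names:

```sql
-- A read-only role for regular employees
CREATE ROLE data_reader;
GRANT SELECT ON employees TO data_reader;

-- A wider role for administrators
CREATE ROLE data_admin;
GRANT SELECT, INSERT, UPDATE, DELETE ON employees TO data_admin;

-- Assign roles to individual users
GRANT data_reader TO clerk_user;
GRANT data_admin  TO admin_user;
```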
4. Database Security Best Practices
To ensure the highest level of security, organizations should follow best practices such as:
Strong Password Policies: Encouraging or enforcing the use of complex passwords
that are difficult to guess.
Patch Management: Regularly updating database software to fix any vulnerabilities
that could be exploited by attackers.
Educating Employees: Training staff on how to recognize phishing attempts, the
importance of keeping their credentials secure, and other security awareness topics.
Least Privilege Principle: Giving users the minimum level of access they need to
perform their jobs. This limits the potential damage if an account is compromised.
Database security is essential to protecting the information that drives businesses and
organizations. Without it, sensitive data could be stolen, altered, or destroyed, causing
severe financial and reputational damage.
(b) Expert Systems
Expert Systems are computer programs that mimic the decision-making abilities of a human
expert. Imagine having a doctor, a lawyer, or a financial advisor available 24/7, always ready
to give you expert advice—that’s what an expert system does, but in a digital form.
1. What are Expert Systems?
An expert system is a type of Artificial Intelligence (AI) that uses a set of rules and
knowledge to solve problems that would normally require human expertise. These systems
are designed to simulate the thought process of a human expert in a particular field. They
can provide advice, make decisions, and even diagnose problems.
2. How Do Expert Systems Work?
Expert systems operate using two main components:
Knowledge Base: This is like the brain of the expert system. It contains all the
information, rules, and facts that the system needs to make decisions. For example,
in a medical expert system, the knowledge base would include data on symptoms,
diseases, treatments, and medical guidelines.
Inference Engine: This is the part of the expert system that applies the rules and
logic from the knowledge base to the specific situation at hand. It’s like a problem
solver that looks at the available information and figures out the best course of
action.
The system works by asking the user a series of questions (similar to how a doctor asks
about symptoms), then uses its knowledge base and inference engine to provide a
recommendation or diagnosis.
3. Types of Expert Systems
There are several types of expert systems, each designed for different kinds of tasks:
Rule-Based Systems: These systems rely on a set of "if-then" rules. For example, if a
patient has a fever and a cough, then the system might conclude that they have the
flu. Rule-based systems are straightforward but can become complex when dealing
with many rules.
Fuzzy Logic Systems: Unlike rule-based systems that deal with clear "yes or no"
answers, fuzzy logic systems handle uncertainty and approximate reasoning. For
instance, instead of saying "this patient has a fever," a fuzzy logic system might say
"this patient probably has a mild fever."
Neural Networks: These are more advanced expert systems that can learn from data
over time. They mimic the way the human brain works by recognizing patterns in
data and improving their performance with experience.
4. Applications of Expert Systems
Expert systems are used in a variety of fields where specialized knowledge is essential:
Medical Diagnosis: Expert systems can assist doctors in diagnosing diseases by
analyzing patient data and comparing it to their knowledge base. These systems help
in suggesting treatments or identifying possible conditions.
Financial Analysis: In finance, expert systems can help in analyzing stock markets,
predicting financial trends, or even managing investment portfolios.
Engineering: Engineers use expert systems to troubleshoot equipment, optimize
designs, or simulate various scenarios to find the best solution.
Legal Advice: Legal expert systems can help in interpreting laws, preparing legal
documents, or advising on potential outcomes in court cases.
5. Advantages of Expert Systems
There are several benefits to using expert systems:
Consistency: Unlike human experts, expert systems always provide consistent
advice. They don’t get tired, distracted, or influenced by emotions.
Availability: Expert systems can work 24/7, offering advice and solutions at any time
of the day, which is particularly useful in critical situations like medical emergencies
or financial crises.
Cost-Effective: Instead of hiring a human expert for every problem, businesses can
use expert systems to provide expert-level advice at a lower cost.
Scalability: Expert systems can handle a large volume of queries simultaneously,
making them scalable solutions for industries that need to deal with many customers
or patients.
6. Limitations of Expert Systems
Despite their benefits, expert systems also have some limitations:
Lack of Creativity: Expert systems can only provide solutions based on their pre-
programmed knowledge. They can’t think outside the box or come up with
innovative solutions like a human might.
Dependence on Quality of Data: The effectiveness of an expert system is directly
related to the quality of the information in its knowledge base. If the data is
outdated or incomplete, the system’s advice may be incorrect.
Inability to Learn: Traditional expert systems don’t learn from new experiences.
They need to be updated manually with new rules and information. However, more
advanced systems like neural networks do have learning capabilities.
7. The Future of Expert Systems
As technology advances, expert systems are becoming more sophisticated. Integration with
machine learning and big data allows these systems to learn from vast amounts of
information and improve over time. In the future, expert systems may become even more
powerful, offering insights and advice in ways that we can’t yet imagine.
In summary, expert systems are valuable tools that replicate human expertise to solve
complex problems. While they have some limitations, their ability to provide consistent,
reliable, and cost-effective advice makes them indispensable in many industries.
SECTION-C
5. Discuss object oriented features of Oracle 10g used for the development of DBMS.
Ans: Object-Oriented Features of Oracle 10g for DBMS Development: Simplified Explanation
Oracle 10g is a powerful version of the Oracle Database Management System (DBMS) that
includes several object-oriented features. These features enable developers to organize and
manage data in a more intuitive and efficient way, especially when dealing with complex
systems. In this simplified explanation, we'll explore the key object-oriented features of
Oracle 10g that make it easier to develop and manage databases.
1. Introduction to Object-Oriented Programming (OOP)
Before diving into Oracle 10g’s features, it's important to understand what object-oriented
programming (OOP) means. OOP is a programming model that organizes software design
around data, or "objects," rather than functions and logic. Key principles of OOP include:
Encapsulation: Bundling the data and methods that operate on the data within one
unit (an object).
Inheritance: Allowing new classes to inherit properties and methods from existing
ones.
Polymorphism: Allowing objects to be treated as instances of their parent class,
making the code more flexible.
Abstraction: Hiding complex implementation details and showing only the necessary
features.
Oracle 10g incorporates these OOP principles into database development, enabling
developers to handle more complex data types and relationships.
2. Object Types
One of the most important features of Oracle 10g is the support for object types. An object
type is a user-defined data structure that allows you to group related data in a single unit,
similar to a class in OOP. It can include attributes (data fields) and methods (functions that
operate on the data).
Example: If you're designing a database for a school, you could create an object type
called Student, with attributes like name, age, and grade, and methods like
calculateGPA().
This makes it easier to represent real-world entities directly in the database, making your
design more intuitive.
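The Student example can be sketched as an Oracle object type; names are illustrative:

```sql
CREATE OR REPLACE TYPE student_t AS OBJECT (
  name  VARCHAR2(50),
  age   NUMBER,
  grade NUMBER,
  MEMBER FUNCTION calculate_gpa RETURN NUMBER
);
/
```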
3. Object Tables
In Oracle 10g, you can store instances of an object type in object tables. These are similar to
regular relational tables, but instead of rows and columns, they store objects. Each row in
an object table represents an object of a specific type.
Example: Using the Student object type, you could create a StudentTable to store
multiple student objects. Each row would be an instance of a Student with its
associated attributes and methods.
This allows you to model complex data more effectively, especially when your data has
multiple related attributes.
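Assuming a student_t object type has already been created, an object table for it might look
like this sketch:

```sql
-- Each row of this table is one student_t object
CREATE TABLE student_table OF student_t;

INSERT INTO student_table VALUES (student_t('Asha', 20, 8));

-- Query the object table like an ordinary table
SELECT s.name, s.grade FROM student_table s;
```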
4. Inheritance in Object Types
Oracle 10g supports inheritance, allowing you to create new object types based on existing
ones. This is particularly useful when you have a hierarchy of related data types.
Example: You could create a base object type called Person with attributes like name
and age. Then, you could create a Student type that inherits from Person and adds
additional attributes like grade and methods like calculateGPA().
Inheritance simplifies your database design by allowing you to reuse existing structures and
extend them as needed.
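In Oracle 10g a type must be declared NOT FINAL before it can be subtyped. A sketch of the
Person/Student hierarchy, with illustrative names, in which student_t is defined as a subtype
of person_t:

```sql
CREATE OR REPLACE TYPE person_t AS OBJECT (
  name VARCHAR2(50),
  age  NUMBER
) NOT FINAL;
/

-- student_t inherits name and age, adding its own attribute
CREATE OR REPLACE TYPE student_t UNDER person_t (
  grade NUMBER
);
/
```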
5. Polymorphism
Oracle 10g supports polymorphism, which means that methods in object types can behave
differently based on the object they are operating on. This allows you to define methods in
a base class and then override them in derived classes.
Example: You might have a Person object with a displayInfo() method that shows the
person's name. The Student object, which inherits from Person, could override
displayInfo() to show the name along with the student's grade.
This feature makes your database more flexible and allows for more sophisticated data
manipulation.
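Assuming a NOT FINAL person_t type whose specification declares a display_info method,
the subtype overrides it with Oracle's OVERRIDING keyword (sketch):

```sql
CREATE OR REPLACE TYPE student_t UNDER person_t (
  grade NUMBER,
  -- Same signature as the parent's method; this version wins for student_t
  OVERRIDING MEMBER FUNCTION display_info RETURN VARCHAR2
);
/
```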
6. Object Views
Object views are another powerful feature of Oracle 10g. They allow you to represent data
stored in relational tables as objects, without actually changing the underlying relational
structure. This provides a bridge between relational and object-oriented models.
Example: If your school database has separate relational tables for Student, Course,
and Grades, you could create an object view that represents a student object with
related course and grade information. This way, you get the benefits of object-
oriented design without having to convert your entire database.
Object views make it easier to work with complex data and integrate object-oriented
features into existing relational systems.
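An object view can be sketched as follows, assuming a student_t type and a relational
students table; WITH OBJECT IDENTIFIER names the attribute(s) that identify each row
object:

```sql
CREATE OR REPLACE VIEW student_ov OF student_t
  WITH OBJECT IDENTIFIER (name)
  AS SELECT s.name, s.age, s.grade
     FROM students s;
```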
7. Object Methods
Just like in object-oriented programming, Oracle 10g allows you to define methods for your
object types. These are functions or procedures that operate on the data within the object.
Example: For the Student object type, you could create a method called
calculateGPA() that calculates the student's grade point average based on their
grades.
By encapsulating functionality within objects, methods make your database more modular
and easier to maintain.
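Method bodies live in a separate CREATE TYPE BODY. A sketch, assuming a student_t
specification that declares calculate_gpa (the formula is illustrative only):

```sql
CREATE OR REPLACE TYPE BODY student_t AS
  MEMBER FUNCTION calculate_gpa RETURN NUMBER IS
  BEGIN
    -- SELF refers to the object instance the method is invoked on
    RETURN SELF.grade / 10;
  END;
END;
/
```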
8. References
Oracle 10g allows you to create references (REFs) to objects stored in object tables. These
references act like pointers in programming, allowing you to link objects together without
duplicating data.
Example: You could create a reference from a Course object to a Student object,
indicating which students are enrolled in the course. This reduces redundancy and
improves the efficiency of your database.
References enable you to model relationships between objects more naturally, similar to
how you would in an OOP environment.
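A REF column can be sketched like this, assuming a student_t type stored in an object table
named student_table:

```sql
CREATE TABLE enrolments (
  course_name VARCHAR2(40),
  -- SCOPE IS restricts the reference to rows of one object table
  student     REF student_t SCOPE IS student_table
);
```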
9. Nested Tables and VARRAYs
Oracle 10g supports nested tables and VARRAYs, which are collections that allow you to
store sets of related objects within a single table column. These are useful for representing
one-to-many relationships or storing lists of related data.
Nested Tables: These are collections where each element is treated as an individual
object. You can store multiple related objects (e.g., a list of a student’s courses) in a
single column.
VARRAYs: These are ordered collections with a fixed maximum size, where the
elements are stored in a defined order.
Both nested tables and VARRAYs allow you to work with groups of related data in a
structured way, making it easier to manage complex relationships.
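Both collection kinds can be sketched in DDL; names are illustrative:

```sql
-- A nested table type: an unbounded set of course names
CREATE OR REPLACE TYPE course_list_t AS TABLE OF VARCHAR2(40);
/

-- Nested-table columns need their own storage table
CREATE TABLE student_courses (
  student_name VARCHAR2(50),
  courses      course_list_t
) NESTED TABLE courses STORE AS courses_tab;

-- A VARRAY: an ordered list with a fixed maximum size of 5
CREATE OR REPLACE TYPE phone_list_t AS VARRAY(5) OF VARCHAR2(15);
/
```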
10. Object Relational Features
Oracle 10g also includes object-relational features, which blend the flexibility of relational
databases with the power of object-oriented design. This allows you to combine the
benefits of both models in your database.
Example: You might use relational tables for simple data and object types for more
complex data structures. Object-relational features allow you to integrate these
different approaches seamlessly, enabling you to choose the best model for each
part of your system.
This hybrid approach gives you more options for designing efficient and maintainable
databases.
11. Object Cache
Oracle 10g provides an object cache, which is an in-memory cache for object instances. This
improves performance by reducing the need to repeatedly query the database for the same
objects.
By caching objects in memory, Oracle 10g can provide quicker access to frequently used
data, reducing the time it takes to retrieve and manipulate objects. This feature is
particularly useful in applications that involve complex data operations.
12. Advanced Queuing
Oracle 10g supports Advanced Queuing (AQ), which allows you to manage message-based
communication between different parts of an application using object types. AQ is a
mechanism that helps handle tasks like sending and receiving messages, processing tasks in
order, and managing work in distributed systems.
Example: In a banking system, when a transaction is made, the information can be
queued and processed in sequence. This ensures that transactions are processed
reliably and in the correct order.
By using object types in Advanced Queuing, you can model complex communication and
coordination patterns more effectively.
13. Object Collections
Oracle 10g supports object collections, which allow you to group related objects together in
a set. This can be useful when you want to work with a collection of related items as a
whole rather than dealing with individual objects one by one.
Example: You could have a collection of Student objects that represent all the
students in a particular class. By using object collections, you can manipulate the
entire class of students together, such as updating their grades or generating
reports.
Object collections provide a way to handle multiple related objects more efficiently, making
your code cleaner and easier to manage.
14. User-Defined Constructors
In Oracle 10g, you can create user-defined constructors for object types. A constructor is a
special method that initializes an object when it is created. This allows you to set up an
object with specific values right from the start.
Example: For the Student object, you could create a constructor that sets the name,
age, and grade when a new student object is created. This ensures that every
student object starts with the necessary data.
Constructors simplify the process of creating and initializing objects, making your database
operations more consistent.
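A user-defined constructor can be sketched as follows; here a hypothetical pupil_t type fills
in defaults when only a name is supplied:

```sql
CREATE OR REPLACE TYPE pupil_t AS OBJECT (
  name  VARCHAR2(50),
  grade NUMBER,
  CONSTRUCTOR FUNCTION pupil_t(name VARCHAR2)
    RETURN SELF AS RESULT
);
/

CREATE OR REPLACE TYPE BODY pupil_t AS
  CONSTRUCTOR FUNCTION pupil_t(name VARCHAR2)
    RETURN SELF AS RESULT IS
  BEGIN
    SELF.name  := name;
    SELF.grade := 0;  -- default until grades are recorded
    RETURN;
  END;
END;
/
```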
15. Comparison Methods
Oracle 10g allows you to define comparison methods for object types, in the form of MAP
or ORDER member functions. These methods determine how objects of a particular type are
compared to one another in conditions such as equality tests and in ORDER BY clauses.
Example: For the Student object, you might define a method to compare two
student objects based on their name or grade. This allows you to perform searches
and sorts more easily.
By defining comparison methods, you can customize how your database handles object
comparisons, improving the accuracy and flexibility of your queries.
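One way to express such a comparison is a MAP member method, sketched below with an assumed student_t type (Oracle uses the returned scalar to compare and sort objects):

```sql
CREATE TYPE student_t AS OBJECT (
    name  VARCHAR2(50),
    grade NUMBER,
    MAP MEMBER FUNCTION sort_key RETURN NUMBER
);
/

CREATE TYPE BODY student_t AS
    MAP MEMBER FUNCTION sort_key RETURN NUMBER IS
    BEGIN
        RETURN grade;  -- students are compared and sorted by grade
    END;
END;
/
```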
16. Handling Complex Data
One of the biggest advantages of using Oracle 10g’s object-oriented features is the ability to
handle complex data more naturally. In traditional relational databases, handling data with
multiple attributes or relationships can be cumbersome and require a lot of effort to
maintain consistency. Object-oriented features simplify this by allowing you to group
related data and methods together in objects.
Example: In a healthcare database, you could create objects for Patient, Doctor, and
Appointment. Each object would encapsulate its own data and behavior, making it
easier to manage relationships and ensure data integrity.
This approach reduces redundancy and makes your database design more intuitive.
17. Easier Maintenance and Reusability
Oracle 10g’s object-oriented features also make it easier to maintain and reuse code. By
encapsulating data and methods in objects, you can make changes in one place without
affecting other parts of your system. This modular approach allows you to reuse code more
effectively, leading to faster development and fewer errors.
Example: If you need to update the way grades are calculated for students, you only
need to change the calculateGPA() method in the Student object. This change will
automatically apply to all student objects, reducing the need for repetitive updates
across the system.
This leads to a more maintainable and scalable database system, which is especially
important for large and complex applications.
18. Conclusion
In summary, Oracle 10g’s object-oriented features provide a powerful set of tools for
developing and managing complex databases. By incorporating principles like encapsulation,
inheritance, polymorphism, and abstraction, these features allow developers to model real-
world entities more naturally and efficiently. The ability to create object types, use object
tables, define methods, and manage relationships between objects makes it easier to build
sophisticated systems that are easier to maintain and scale.
For anyone working with Oracle 10g, understanding and leveraging these object-oriented
features can significantly improve the design and performance of database-driven
applications. Whether you’re dealing with complex data structures, managing relationships
between entities, or optimizing performance, Oracle 10g’s object-oriented capabilities offer
a robust solution for modern database development.
6. Discuss the basic structure of DDL AND DML to demonstrate the working of SQL script.
Ans: Understanding the Basic Structure of DDL and DML in SQL Scripts
In SQL (Structured Query Language), two major types of commands help you work with
databases: DDL (Data Definition Language) and DML (Data Manipulation Language). These
commands are like tools that let you create, organize, and manage the data stored in
databases. Let’s break down these concepts into simpler terms and see how they work in
SQL scripts.
What is DDL (Data Definition Language)?
DDL commands deal with the structure of the database. Think of them as instructions to set
up the framework of your database. DDL commands allow you to create, alter, and delete
database objects like tables, indexes, and views. Here's a closer look at each of these DDL
commands:
1. CREATE: This command is used to create new objects in the database. For example,
if you want to create a new table to store information about students, you would
use the CREATE command.
Example:
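A statement matching the description below might look like this (the column data types are assumed):

```sql
CREATE TABLE Students (
    StudentID INT PRIMARY KEY,
    FirstName VARCHAR(50),
    LastName  VARCHAR(50),
    Age       INT
);
```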
Explanation: This SQL script creates a table named Students with columns:
StudentID, FirstName, LastName, and Age.
2. ALTER: The ALTER command modifies an existing object in the database. For instance, if
you decide to add a new column to the Students table, you would use the ALTER command.
Example:
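A sketch of the statement described below (the data type of the new column is assumed):

```sql
ALTER TABLE Students
ADD Gender VARCHAR(10);
```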
Explanation: This script adds a new column called Gender to the Students table.
3. DROP: This command deletes objects like tables, indexes, or views from the database. Be
cautious when using DROP because it permanently removes the data and structure.
Example:
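The statement described below:

```sql
DROP TABLE Students;
```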
Explanation: This script deletes the entire Students table from the database.
4. TRUNCATE: TRUNCATE is used to delete all the data from a table without removing the
table structure. This is useful when you want to clear out a table but keep it ready for new
data.
Example:
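The statement described below:

```sql
TRUNCATE TABLE Students;
```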
Explanation: This script removes all the records from the Students table but keeps
the table itself intact.
What is DML (Data Manipulation Language)?
DML commands help you work with the data stored inside the database. Once the structure
of the database is set up using DDL commands, you use DML commands to insert, update,
delete, or retrieve data from the tables. Here are the key DML commands:
1. INSERT: This command adds new data into a table. For example, to add a new
student to the Students table, you would use the INSERT command.
Example:
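A sketch of the statement described below (the sample values are assumed for illustration):

```sql
INSERT INTO Students (StudentID, FirstName, LastName, Age)
VALUES (1, 'John', 'Doe', 20);
```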
Explanation: This script inserts a new row into the Students table with the specified
values.
2. SELECT: The SELECT command retrieves data from the database. If you want to see the
list of students stored in the Students table, you would use SELECT.
Example:
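The statement described below:

```sql
SELECT * FROM Students;
```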
Explanation: This script retrieves all columns (*) and rows from the Students table.
3. UPDATE: This command modifies existing data in the database. For instance, if you need
to update the age of a student, you would use UPDATE.
Example:
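The statement described below:

```sql
UPDATE Students
SET Age = 21
WHERE StudentID = 1;
```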
Explanation: This script updates the Age of the student with StudentID = 1 to 21.
4. DELETE: This command removes data from a table. If you need to delete a specific
student’s record, you would use DELETE.
Example:
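The statement described below:

```sql
DELETE FROM Students
WHERE StudentID = 1;
```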
Explanation: This script deletes the record of the student with StudentID = 1.
Demonstrating the Working of SQL Script
An SQL script is a set of SQL commands written to perform a particular task. To demonstrate
the working of DDL and DML, let's go through an example scenario step by step:
Step 1: Creating the Table (DDL - CREATE)
First, you need to create a table to store your data. In this case, let’s create a table named
Employees:
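A script matching the columns described below (the data types are assumed):

```sql
CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY,
    FirstName  VARCHAR(50),
    LastName   VARCHAR(50),
    Position   VARCHAR(50),
    Salary     DECIMAL(10, 2)
);
```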
Explanation: This script sets up the framework for storing employee information,
with columns like EmployeeID, FirstName, LastName, Position, and Salary.
Step 2: Adding Data (DML - INSERT)
Once the table is ready, you can start adding data to it:
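A sketch of the insert (Alice's last name, position, and starting salary are assumed values for this walkthrough):

```sql
INSERT INTO Employees (EmployeeID, FirstName, LastName, Position, Salary)
VALUES (1, 'Alice', 'Smith', 'Analyst', 60000);
```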
Explanation: This script inserts a new employee record into the Employees table.
Step 3: Retrieving Data (DML - SELECT)
Now, suppose you want to view the employees in your database:
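The statement described below:

```sql
SELECT * FROM Employees;
```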
Explanation: This script retrieves all records from the Employees table, showing the
data you inserted.
Step 4: Modifying Data (DML - UPDATE)
If you need to update an employee’s information, such as giving Alice a raise:
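A sketch of the update (Alice's EmployeeID of 1 follows from the earlier insert in this walkthrough):

```sql
UPDATE Employees
SET Salary = 65000
WHERE EmployeeID = 1;
```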
Explanation: This script updates Alice’s salary to 65,000.
Step 5: Removing Data (DML - DELETE)
If Alice decides to leave the company, you would delete her record:
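The statement described below:

```sql
DELETE FROM Employees
WHERE EmployeeID = 1;
```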
Explanation: This script deletes Alice’s record from the Employees table.
Step 6: Modifying the Table Structure (DDL - ALTER)
Later, if you want to add a new column to track employee hire dates:
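The statement described below:

```sql
ALTER TABLE Employees
ADD HireDate DATE;
```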
Explanation: This script adds a new column HireDate to the Employees table.
Step 7: Deleting the Table (DDL - DROP)
Finally, if you no longer need the Employees table, you can delete it:
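The statement described below:

```sql
DROP TABLE Employees;
```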
Explanation: This script removes the table and all its data from the database.
Conclusion
DDL and DML are two essential parts of SQL that help you work with databases. DDL
commands (like CREATE, ALTER, and DROP) are used to define and manage the structure of
the database, while DML commands (like INSERT, SELECT, UPDATE, and DELETE) allow you
to interact with the data stored within that structure.
By using these commands in SQL scripts, you can create, modify, and manage databases
efficiently. The ability to use DDL and DML effectively is crucial for anyone working with
databases, as it allows you to set up the right structures and manipulate the data according
to your needs.
With this understanding, you can now write SQL scripts that perform a wide range of tasks,
from setting up databases to managing the information stored in them.
SECTION-D
7.(a) What are types of cursors? Explain the working of implicit cursor by taking some
suitable examples.
Ans: Understanding Cursors in Database Management Systems
In database management systems, a cursor is a tool that allows you to fetch and manipulate
data from a result set one row at a time. Think of a cursor as a pointer that helps you
navigate through a set of data, such as a list of students or employees, row by row, so that
you can process each entry individually.
Types of Cursors
Cursors can be broadly categorized into two types: Implicit Cursors and Explicit Cursors.
1. Implicit Cursors:
o These are automatically created by the database system whenever an SQL
statement (like a SELECT, INSERT, UPDATE, or DELETE) is executed.
o Implicit cursors are simple and require less coding effort from the
programmer, as the database system handles most of the tasks
automatically.
o They are mostly used when you only need to handle one row at a time.
2. Explicit Cursors:
o These are defined by the programmer in the code. You explicitly create them
to handle complex operations where you need to process multiple rows or
execute complex logic.
o Explicit cursors give more control to the programmer, allowing them to open,
fetch, and close the cursor manually.
o They are used when you need to process multiple rows or perform
operations that require more control over the result set.
Implicit Cursors:
Let's dive deeper into Implicit Cursors, as the question focuses on them.
What is an Implicit Cursor?
An implicit cursor is automatically created by the Oracle database whenever you execute an
SQL statement. You don't need to declare it explicitly in your PL/SQL code. The database
system takes care of opening, fetching, and closing the cursor automatically. This simplicity
makes implicit cursors easy to use, especially for operations that involve a single row of
data.
When Are Implicit Cursors Used?
Implicit cursors are typically used in the following situations:
1. SELECT INTO Statements:
o When you use a SELECT INTO statement to fetch a single row of data from a
table, an implicit cursor is used behind the scenes.
2. DML Statements:
o For data manipulation language (DML) operations like INSERT, UPDATE, or
DELETE, the database automatically uses an implicit cursor to process the
operation.
Working of Implicit Cursors: Step by Step
To understand how implicit cursors work, let’s break down the process:
1. SQL Statement Execution:
o When an SQL statement (such as SELECT, INSERT, UPDATE, or DELETE) is
executed in PL/SQL, the Oracle database creates an implicit cursor.
o For example, if you write a SELECT INTO statement to fetch a single record
from the employees table, an implicit cursor is created.
2. Opening the Cursor:
o The database automatically opens the cursor to retrieve data. You don’t need
to manually open the cursor as you would with an explicit cursor.
3. Fetching Data:
o The cursor fetches the data from the result set. Since implicit cursors are
used for single-row operations, the cursor fetches only one row at a time.
4. Closing the Cursor:
o After fetching the data, the cursor is automatically closed by the database.
Again, this is handled by the system, so you don’t need to close it manually.
Example: Using Implicit Cursors in a SELECT INTO Statement
Let’s look at a simple example to illustrate how implicit cursors work in a SELECT INTO
statement.
Imagine you have an employees table with columns employee_id, first_name, last_name,
and salary. You want to fetch the salary of an employee whose employee_id is 101.
Here’s how you would write the PL/SQL code using an implicit cursor:
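A sketch of that block (it assumes the employees table exists and employee 101 is present):

```sql
DECLARE
    v_salary employees.salary%TYPE;
BEGIN
    -- The SELECT INTO statement uses an implicit cursor behind the scenes
    SELECT salary
    INTO   v_salary
    FROM   employees
    WHERE  employee_id = 101;

    DBMS_OUTPUT.PUT_LINE('Salary: ' || v_salary);
END;
/
```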
Explanation:
The SELECT INTO statement fetches the salary of the employee with employee_id =
101.
The database creates an implicit cursor to handle this operation.
The cursor automatically opens, fetches the salary, and closes after retrieving the
data.
The fetched salary is stored in the v_salary variable, which is then printed using
DBMS_OUTPUT.PUT_LINE.
Implicit Cursor Attributes
Oracle provides several attributes for implicit cursors that you can use to get more
information about the operation. These attributes are prefixed with SQL% and include:
1. SQL%ROWCOUNT:
o Returns the number of rows affected by the SQL statement.
o Example: If an UPDATE statement modifies 5 rows, SQL%ROWCOUNT would
return 5.
2. SQL%FOUND:
o Returns TRUE if the SQL statement affected one or more rows, and FALSE
otherwise.
o Example: If a SELECT INTO statement retrieves a row, SQL%FOUND would be
TRUE.
3. SQL%NOTFOUND:
o Returns TRUE if the SQL statement did not affect any rows, and FALSE
otherwise.
o Example: If a DELETE statement doesn't find any rows to delete,
SQL%NOTFOUND would be TRUE.
4. SQL%ISOPEN:
o Always returns FALSE for implicit cursors because they are automatically
closed after execution.
o Example: Since implicit cursors are automatically managed, SQL%ISOPEN is
not usually used with them.
Example: Using Implicit Cursor Attributes
Let’s extend the previous example to use some of the implicit cursor attributes:
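A sketch of the extended block. One caveat: a SELECT INTO that finds no row raises the NO_DATA_FOUND exception rather than simply setting SQL%FOUND to FALSE, so the "not found" message is printed from the exception handler here:

```sql
DECLARE
    v_salary employees.salary%TYPE;
BEGIN
    SELECT salary
    INTO   v_salary
    FROM   employees
    WHERE  employee_id = 101;

    -- If we reach this point, the row was found
    IF SQL%FOUND THEN
        DBMS_OUTPUT.PUT_LINE('Salary: ' || v_salary);
    END IF;
    DBMS_OUTPUT.PUT_LINE('Rows fetched: ' || SQL%ROWCOUNT);  -- prints 1
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE('No employee was found.');
        DBMS_OUTPUT.PUT_LINE('Rows fetched: ' || SQL%ROWCOUNT);  -- prints 0
END;
/
```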
Explanation:
After the SELECT INTO statement, we check if any row was fetched using
SQL%FOUND.
If SQL%FOUND is TRUE, the salary is printed. Otherwise, a message indicating that
no employee was found is displayed.
SQL%ROWCOUNT is used to print the number of rows fetched, which will be 1 if a
row is found or 0 if no rows are fetched.
Advantages of Implicit Cursors
1. Simplicity:
o Implicit cursors are easy to use and require less coding effort, making them
ideal for simple operations.
2. Automatic Management:
o The database handles opening, fetching, and closing the cursor automatically,
reducing the chances of errors.
3. Performance:
o Since implicit cursors are designed for single-row operations, they are
optimized for performance in such scenarios.
Disadvantages of Implicit Cursors
1. Limited Control:
o Implicit cursors offer limited control compared to explicit cursors. You can’t
manually open or close them, and they are not suitable for processing
multiple rows.
2. Lack of Flexibility:
o For complex operations that require processing multiple rows or performing
advanced logic, implicit cursors are not sufficient.
Conclusion
In summary, implicit cursors are a convenient and straightforward tool for handling single-
row operations in Oracle databases. They are automatically managed by the database
system, making them easy to use, especially for simple queries like SELECT INTO. While they
offer simplicity and ease of use, they are not suitable for more complex operations that
require processing multiple rows or performing advanced logic.
By understanding how implicit cursors work and when to use them, you can write efficient
and effective PL/SQL code for your database applications.
(b) Explain the basic structure used for a procedure using an example code snippet to
justify the answer.
Ans: Understanding the Basic Structure of a Procedure with an Example Code Snippet
In programming, a procedure is a block of code that performs a specific task. Think of it as a
mini-program within a larger program. In the context of databases, particularly in Oracle's
PL/SQL (Procedural Language/Structured Query Language), procedures help automate tasks,
such as updating records, validating data, or performing complex calculations. By
encapsulating a series of steps into one reusable block, procedures make your code more
organized and efficient.
What is a Procedure?
A procedure is a subprogram in PL/SQL that performs a particular operation. You can think
of it as a function that doesn’t return a value but still carries out tasks such as inserting,
updating, or deleting records in a database. Once defined, you can call this procedure
multiple times in your code, which saves you from writing the same code over and over
again.
Why Use Procedures?
1. Code Reusability: Once you write a procedure, you can reuse it as many times as you
need, in different parts of your program.
2. Code Maintenance: If you need to update or fix the logic, you only need to do it in
one place: the procedure itself.
3. Modular Code: Procedures allow you to break down complex logic into smaller,
more manageable pieces, making your code more readable.
4. Security: You can encapsulate business logic within procedures, ensuring that only
valid data is processed. This helps in securing the database.
Basic Structure of a Procedure in PL/SQL
A procedure generally has the following structure:
1. Header: This is where you define the procedure’s name and parameters (if any).
Parameters are the values that you can pass into the procedure to customize its
operation.
2. Declaration Section: Optional section where you declare variables, constants, or
other procedures/functions.
3. Executable Section: The core part of the procedure where the main logic or
operations are carried out. This section can contain SQL statements like SELECT,
INSERT, UPDATE, or DELETE.
4. Exception Section: This is an optional section where you handle errors. If something
goes wrong during the execution, the code here helps manage those errors
gracefully.
5. End: This marks the end of the procedure.
Here is a breakdown of these sections with an example code snippet.
Example Code Snippet: Creating a Simple Procedure
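A sketch of the procedure discussed below. Note one assumption: an UPDATE that matches no rows does not raise NO_DATA_FOUND by itself, so this version raises it explicitly when nothing was changed, to match the exception handling described in the walkthrough:

```sql
CREATE OR REPLACE PROCEDURE update_employee_salary (
    emp_id     IN NUMBER,
    new_salary IN NUMBER
)
IS
BEGIN
    UPDATE employees
    SET    salary = new_salary
    WHERE  employee_id = emp_id;

    -- Raise NO_DATA_FOUND manually if no employee matched the given ID
    IF SQL%NOTFOUND THEN
        RAISE NO_DATA_FOUND;
    END IF;

    COMMIT;  -- save the change permanently
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE('No employee found with ID ' || emp_id);
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('An unexpected error occurred: ' || SQLERRM);
END update_employee_salary;
/
```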
Let’s go through this procedure step by step.
1. Header
CREATE OR REPLACE: This tells the database that you are creating a new procedure.
If a procedure with the same name already exists, it will be replaced with this new
version.
Procedure Name: Here, the procedure is called update_employee_salary. The name
should be meaningful, describing the purpose of the procedure.
Parameters: The procedure accepts two parameters:
o emp_id (type NUMBER): This represents the ID of the employee whose
salary will be updated.
o new_salary (type NUMBER): This is the new salary that will be assigned to
the employee.
The IN keyword specifies that these parameters are input parameters, meaning values must
be provided when calling the procedure.
2. Declaration Section
In this procedure, there is no declaration section. If needed, this is where you would declare
variables, constants, or even other procedures or functions to be used within the procedure.
3. Executable Section
BEGIN: This keyword marks the start of the executable section.
UPDATE Statement: The core logic of this procedure is an SQL UPDATE statement
that changes the salary of an employee in the employees table based on the
employee_id.
COMMIT: This ensures that the changes made by the UPDATE statement are saved
permanently in the database.
4. Exception Section
EXCEPTION: This section handles errors that might occur during the execution of the
procedure.
NO_DATA_FOUND: This exception is raised when the UPDATE statement fails to find
an employee with the given ID. In this case, a message is printed using
DBMS_OUTPUT.PUT_LINE.
OTHERS: This is a catch-all exception that handles any other unexpected errors.
5. End
END: This keyword marks the end of the procedure.
Calling the Procedure
After creating the procedure, you can call it like this:
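```sql
BEGIN
    update_employee_salary(101, 50000);
END;
/
```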
This will update the salary of the employee with ID 101 to 50,000.
Key Concepts Explained
1. CREATE OR REPLACE: This is useful because if you want to modify an existing
procedure, you don't need to drop it first; you can simply replace it with the updated
version.
2. Parameters: Parameters allow procedures to be dynamic. Instead of hardcoding
values, you can pass them when calling the procedure. For example, instead of
always updating the salary of one specific employee, you can use a different emp_id
and new_salary every time you call the procedure.
3. BEGIN...END: These keywords define the boundaries of the executable part of the
procedure. Everything between BEGIN and END is the code that gets executed when
the procedure is called.
4. Exception Handling: This is an important part of PL/SQL procedures because it helps
manage errors gracefully. Without exception handling, your procedure might fail
silently or crash the entire program. Handling specific exceptions (like
NO_DATA_FOUND) allows you to provide meaningful error messages to users or
take corrective actions.
5. COMMIT: In databases, a COMMIT ensures that changes are saved. If you update a
record but don’t commit, those changes won’t be stored in the database. However,
committing after every small change may not always be a good idea, especially in
larger transactions, as it can lead to performance issues. It’s important to use
COMMIT wisely.
Variations in Procedures
Procedures can get more complex, depending on the task. Here are a few variations:
1. Procedures with OUT Parameters: Sometimes, you might want a procedure to not
only perform a task but also return some data. In this case, you can use OUT
parameters.
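A sketch of such a procedure (the procedure name is assumed for illustration):

```sql
CREATE OR REPLACE PROCEDURE get_employee_salary (
    emp_id     IN  NUMBER,
    emp_salary OUT NUMBER   -- the procedure writes its result into this parameter
)
IS
BEGIN
    SELECT salary
    INTO   emp_salary
    FROM   employees
    WHERE  employee_id = emp_id;
END get_employee_salary;
/
```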
Here, emp_salary is an OUT parameter, meaning the procedure will output the employee's
salary based on the provided emp_id.
2. Procedures with Multiple SQL Statements: A procedure doesn’t have to contain just one
SQL statement. It can perform multiple operations:
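A sketch of such a procedure (the procedure, log table, and column names are assumed for illustration):

```sql
CREATE OR REPLACE PROCEDURE change_employee_details (
    emp_id      IN NUMBER,
    new_salary  IN NUMBER,
    new_dept_id IN NUMBER
)
IS
    v_old_salary employees.salary%TYPE;
BEGIN
    -- Capture the current salary for the audit log
    SELECT salary
    INTO   v_old_salary
    FROM   employees
    WHERE  employee_id = emp_id;

    -- Update both salary and department in one statement
    UPDATE employees
    SET    salary        = new_salary,
           department_id = new_dept_id
    WHERE  employee_id = emp_id;

    -- Log the salary change in a separate table
    INSERT INTO salary_log (employee_id, old_salary, new_salary, changed_on)
    VALUES (emp_id, v_old_salary, new_salary, SYSDATE);

    COMMIT;
END change_employee_details;
/
```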
This procedure not only updates the employee’s salary and department but also logs
the salary change in a separate table.
Conclusion
Procedures in PL/SQL allow you to encapsulate logic into reusable blocks, making your code
more modular, maintainable, and secure. The basic structure includes a header, an optional
declaration section, an executable section, and an optional exception handling section.
Procedures can accept parameters, execute SQL statements, and handle errors, making
them a powerful tool in database management.
In this explanation, we’ve used a simple procedure to update an employee’s salary as an
example, but the possibilities with procedures are endless. You can use them to perform
calculations, validate data, automate repetitive tasks, and much more. As your database
applications grow in complexity, using procedures will help you keep your code organized
and efficient.
8. Explain the following concepts for PL/SQL:
(a) Database triggers
(b) Explicit cursor.
Ans: PL/SQL Concepts: Database Triggers and Explicit Cursors Simplified
Introduction to PL/SQL
PL/SQL (Procedural Language for SQL) is an extension of SQL used in Oracle databases. It
combines SQL's data manipulation power with procedural programming features, making it
a more powerful tool for working with databases. Two essential concepts in PL/SQL that
enhance database functionality are database triggers and explicit cursors. Let’s break them
down in simple terms.
(a) Database Triggers
1. What are Database Triggers?
A database trigger is like an automatic alarm system in your database. It automatically
executes a predefined action in response to certain events happening in your database. For
example, when someone adds a new record to a table, updates existing data, or deletes a
record, a trigger can be set to take action based on these events. Think of it as setting up
rules or conditions that trigger an action without the need for manual intervention.
2. Why Use Database Triggers?
Triggers are useful for:
Data Validation: Automatically checking and ensuring data integrity before it is
saved in the database.
Auditing: Tracking changes made to the data, such as who modified it and when.
Enforcing Business Rules: Implementing business logic within the database itself, like
preventing an employee from being assigned more than one role.
3. How Do Triggers Work?
Triggers are connected to specific events in your database tables. These events can be:
INSERT: Triggered when a new record is added to the table.
UPDATE: Triggered when an existing record is modified.
DELETE: Triggered when a record is removed from the table.
You can set triggers to run before or after these events. For example, a trigger could check
data before it is inserted (a BEFORE INSERT trigger), or it could log an action after a record is
updated (an AFTER UPDATE trigger).
4. Example of a Database Trigger
Imagine you have a table called employees. You want to ensure that any time an employee’s
salary is updated, the old salary is recorded in another table for auditing purposes. You can
create a trigger that automatically saves the old salary before the update happens:
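A sketch of such a trigger (the salary_audit table and its columns are assumed; FOR EACH ROW makes it a row-level trigger so :OLD values are available):

```sql
CREATE OR REPLACE TRIGGER before_salary_update
BEFORE UPDATE OF salary ON employees
FOR EACH ROW
BEGIN
    -- Record the old salary before the new value is written
    INSERT INTO salary_audit (employee_id, old_salary, changed_on)
    VALUES (:OLD.employee_id, :OLD.salary, SYSDATE);
END;
/
```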
In this example:
The trigger is called before_salary_update.
It is set to trigger before an update to the salary column in the employees table.
It inserts the old salary and employee ID into the salary_audit table before the
update happens.
5. Types of Triggers
There are two main types of triggers:
Row-level triggers: These triggers are executed once for every row affected by the
triggering event. For example, if 10 rows are updated, the trigger will fire 10 times.
Statement-level triggers: These triggers are executed once for the entire SQL
statement, regardless of how many rows are affected. For example, even if 10 rows
are updated, the trigger will only fire once.
(b) Explicit Cursors
1. What are Explicit Cursors?
In PL/SQL, a cursor is a pointer that allows you to retrieve and process data row by row from
a result set (a set of rows returned by an SQL query). An explicit cursor is one that you
manually create and control in your PL/SQL code.
Think of a cursor as a way of moving through the rows in a result set, one by one. Explicit
cursors give you more control because you can open, fetch, and close them manually. This is
different from implicit cursors, which PL/SQL creates automatically whenever you run a
DML statement or a SELECT INTO query that returns a single row.
2. Why Use Explicit Cursors?
Explicit cursors are useful when:
You need to fetch multiple rows from a query and process them one at a time.
You want to control the flow of fetching data, like skipping rows, stopping fetching
after certain conditions, etc.
You need to perform complex operations on each row of data returned by a query.
3. How Do Explicit Cursors Work?
The process of working with an explicit cursor involves four main steps:
1. Declare the cursor: You define a cursor with a SELECT statement to retrieve the data.
2. Open the cursor: This step executes the query associated with the cursor and makes
the result set available for fetching.
3. Fetch data from the cursor: You retrieve rows from the result set, one at a time,
using the FETCH statement.
4. Close the cursor: After you’ve processed all the rows, you close the cursor to free up
resources.
4. Example of an Explicit Cursor
Suppose you want to retrieve and process the names of all employees who work in a
specific department. You could write a PL/SQL block using an explicit cursor:
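A sketch of that block (it assumes department 10 exists in the employees table):

```sql
DECLARE
    -- Step 1: declare the cursor with its SELECT statement
    CURSOR emp_cursor IS
        SELECT first_name
        FROM   employees
        WHERE  department_id = 10;

    emp_name employees.first_name%TYPE;
BEGIN
    OPEN emp_cursor;                      -- Step 2: open the cursor
    LOOP
        FETCH emp_cursor INTO emp_name;   -- Step 3: fetch one row at a time
        EXIT WHEN emp_cursor%NOTFOUND;
        DBMS_OUTPUT.PUT_LINE(emp_name);
    END LOOP;
    CLOSE emp_cursor;                     -- Step 4: close the cursor
END;
/
```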
In this example:
The cursor emp_cursor is declared to select the names of employees from the
employees table where the department ID is 10.
The cursor is opened, and then a loop is used to fetch each employee's name into
the variable emp_name.
The loop continues until there are no more rows to fetch
(emp_cursor%NOTFOUND).
Finally, the cursor is closed.
5. Explicit Cursor Attributes
Cursors come with special attributes that help you control the flow of data. Some common
attributes include:
%FOUND: Returns TRUE if the last fetch operation returned a row.
%NOTFOUND: Returns TRUE if the last fetch did not return a row.
%ROWCOUNT: Returns the number of rows fetched so far.
%ISOPEN: Returns TRUE if the cursor is currently open.
6. Advantages of Explicit Cursors
Row-by-row Processing: They allow you to process multiple rows of data one at a
time, which is useful in scenarios where you need to perform operations on
individual rows.
Fine-grained Control: You have control over when to open, fetch, and close the
cursor, making it flexible for complex logic.
7. When to Use Explicit Cursors
Explicit cursors are most useful when you are working with queries that return multiple
rows and you need to process each row individually. They are also helpful when you need to
perform multiple operations on the result set, such as calculating totals or validating data.
Conclusion
Database triggers and explicit cursors are two powerful features of PL/SQL that enhance the
functionality of databases. Triggers automate actions based on specific events in the
database, making your applications more efficient and secure. Explicit cursors give you the
ability to handle and process query results manually, providing greater control over data
retrieval and manipulation.
Understanding these concepts can significantly improve your ability to write more dynamic
and responsive database applications. Both tools are essential for managing complex
operations within an Oracle database, ensuring that data integrity and business rules are
maintained automatically.